Ch.5 Spans and Basis


Spans

The span of a subset $S$ of a vector space is the set of all linear combinations of vectors from $S$:
$$[S]=\{c_1\vec{s}_1+\cdots+c_n\vec{s}_n \mid c_1,\ldots,c_n\in\mathbb{R}\text{ and }\vec{s}_1,\ldots,\vec{s}_n\in S\}$$
The notation can be $[S]$, $\text{span}(S)$, or $\text{sp}(S)$.
If the subset $S=\varnothing$, then $[S]=\{\vec{0}\}$ is the trivial subspace.

Example 5.1

$$S=\left\{\begin{pmatrix}1\\2\end{pmatrix}\right\}$$
$$[S]=\left\{a\begin{pmatrix}1\\2\end{pmatrix}\,\middle|\,a\in\mathbb{R}\right\}$$

Example 5.2

$S=\left\{\begin{pmatrix}1\\2\\0\end{pmatrix},\begin{pmatrix}2\\1\\0\end{pmatrix}\right\}$ is a subset of $\mathbb{R}^3$.
The span is the $xy$-plane, since any vector $\begin{pmatrix}x\\y\\0\end{pmatrix}$ can be written as a linear combination of the vectors in $S$.
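As a quick numerical check (a sketch, not part of the original text; the helper name `coefficients` is made up), we can solve $c_1\begin{pmatrix}1\\2\\0\end{pmatrix}+c_2\begin{pmatrix}2\\1\\0\end{pmatrix}=\begin{pmatrix}x\\y\\0\end{pmatrix}$ by elimination, which gives $c_1=(2y-x)/3$ and $c_2=(2x-y)/3$:

```python
from fractions import Fraction

def coefficients(x, y):
    """Solve c1*(1,2,0) + c2*(2,1,0) = (x,y,0); formulas derived by elimination."""
    return Fraction(2 * y - x, 3), Fraction(2 * x - y, 3)

# Spot-check that every sampled (x, y, 0) is a combination of the two vectors.
for x, y in [(3, 0), (-1, 5), (7, 7)]:
    c1, c2 = coefficients(x, y)
    assert (c1 + 2 * c2, 2 * c1 + c2, 0) == (x, y, 0)
```

Since the coefficients exist for every $x$ and $y$, the span is the whole $xy$-plane.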

Span of a subset is a subspace

Proof

If $S$ is empty, the span is the trivial subspace.
If $S$ is not empty, we only need to check that the span $[S]$ is closed under linear combinations of pairs of vectors.
For $\vec{s}_i\in S$ and $c_k,p,r\in\mathbb{R}$,
$$p\cdot(c_1\vec{s}_1+\cdots+c_n\vec{s}_n)+r\cdot(c_{n+1}\vec{s}_{n+1}+\cdots+c_m\vec{s}_m)=pc_1\vec{s}_1+\cdots+pc_n\vec{s}_n+rc_{n+1}\vec{s}_{n+1}+\cdots+rc_m\vec{s}_m$$
Here we have two linear combinations of vectors in $S$, i.e. two vectors from $[S]$. A linear combination of those two vectors yields another linear combination of the $\vec{s}_i$, which is itself a linear combination of vectors in $S$, so it is also in $[S]$. Thus, $[S]$ is closed under linear combinations and is a subspace.
(Note that some of the $\vec{s}_j$'s may be the same, but their terms can be combined and the result is still a linear combination of elements of $S$.)

A spanning set for a subspace $W$ is a subset $S\subseteq W$ such that $W=[S]$. In other words, any subset $S$ that spans $W$ is a spanning set; usually the interest is in the *smallest* such subset.

Finding Spanning Sets

Example 5.3

Start with this subset of $\mathbb{R}^3$:
$$P=\left\{\begin{pmatrix}x\\y\\z\end{pmatrix}\,\middle|\,x+y+z=0\right\}$$
Parametrize $x+y+z=0\rightarrow x=-y-z$ to get
$$P=\left\{y\begin{pmatrix}-1\\1\\0\end{pmatrix}+z\begin{pmatrix}-1\\0\\1\end{pmatrix}\,\middle|\,y,z\in\mathbb{R}\right\}$$
Clearly, this is the span of the set
$$\left\{\begin{pmatrix}-1\\1\\0\end{pmatrix},\begin{pmatrix}-1\\0\\1\end{pmatrix}\right\}$$
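The parametrization can be sanity-checked both ways in a few lines of Python (a sketch; the helper name `point` is ours):

```python
def point(y, z):
    """The parametrization y*(-1,1,0) + z*(-1,0,1) of the plane P."""
    return (-y - z, y, z)

# Every parameter choice lands in P: the coordinates sum to zero.
for y in range(-3, 4):
    for z in range(-3, 4):
        assert sum(point(y, z)) == 0

# Conversely, a point of P, say (4, -1, -3), is recovered from its own y and z.
assert point(-1, -3) == (4, -1, -3)
```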

Example 5.4

$$L=\left\{\begin{pmatrix}x\\y\\z\end{pmatrix}\,\middle|\,x-y+z=0\text{ and }x+2z=0\right\}$$
Row reduce to get $y=-z$ and $x=-2z$.
So the spanning set is $\left\{\begin{pmatrix}-2\\-1\\1\end{pmatrix}\right\}$ and the span is $\left\{z\begin{pmatrix}-2\\-1\\1\end{pmatrix}\,\middle|\,z\in\mathbb{R}\right\}$
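A minimal sketch (not from the text) confirming that every multiple of $(-2,-1,1)$ satisfies both defining equations of $L$:

```python
def member(t):
    """t * (-2, -1, 1), a general element of the claimed span."""
    return (-2 * t, -t, t)

# Each element satisfies both defining equations of L.
for t in range(-5, 6):
    x, y, z = member(t)
    assert x - y + z == 0      # first equation
    assert x + 2 * z == 0      # second equation
```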

Example 5.5

Consider the space of quadratic polynomials $\mathcal{P}_2=\{ax^2+bx+c \mid a,b,c\in\mathbb{R}\}$.
A spanning set is $\{x^2,x,1\}$.
The linear polynomials and the constant polynomials are also subspaces, with similar spanning sets.
Suppose the space was instead $\{ax^2+bx+c \mid a+b+c=0\}$.
Parametrize as $a=-b-c$, and the space is
$$\{-bx^2-cx^2+bx+c \mid b,c\in\mathbb{R}\}=\{b(x-x^2)+c(1-x^2) \mid b,c\in\mathbb{R}\}$$
so the space is the span $[\{x-x^2,1-x^2\}]$.
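This can be checked by encoding each polynomial as a coefficient triple (an encoding chosen here just for the check, not notation from the text):

```python
# Encode a*x^2 + b*x + c as the coefficient triple (a, b, c).
P1 = (-1, 1, 0)   # x - x^2
P2 = (-1, 0, 1)   # 1 - x^2

def combo(b, c):
    """The combination b*(x - x^2) + c*(1 - x^2), as a coefficient triple."""
    return tuple(b * p + c * q for p, q in zip(P1, P2))

# Every combination satisfies the defining condition a + b + c = 0.
for b in range(-3, 4):
    for c in range(-3, 4):
        assert sum(combo(b, c)) == 0
```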

Example 5.6

Find a spanning set of the vector space
$$\left\{\begin{pmatrix}a+b&a-2b\\3a&5a+b\end{pmatrix}\,\middle|\,a,b\in\mathbb{R}\right\}$$

Separate the matrices into
$$\left\{a\begin{pmatrix}1&1\\3&5\end{pmatrix}+b\begin{pmatrix}1&-2\\0&1\end{pmatrix}\,\middle|\,a,b\in\mathbb{R}\right\}$$
The spanning set is clearly
$$\left\{\begin{pmatrix}1&1\\3&5\end{pmatrix},\begin{pmatrix}1&-2\\0&1\end{pmatrix}\right\}$$
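The separation is an entrywise identity, which a short sketch can confirm (helper names `element` and `combo` are ours):

```python
def element(a, b):
    """The general matrix ((a+b, a-2b), (3a, 5a+b)) of the space."""
    return ((a + b, a - 2 * b), (3 * a, 5 * a + b))

M1 = ((1, 1), (3, 5))    # multiplies a
M2 = ((1, -2), (0, 1))   # multiplies b

def combo(a, b):
    """a*M1 + b*M2, computed entrywise."""
    return tuple(tuple(a * p + b * q for p, q in zip(r1, r2))
                 for r1, r2 in zip(M1, M2))

# Both expressions give the same matrix for every a and b sampled.
for a in range(-2, 3):
    for b in range(-2, 3):
        assert element(a, b) == combo(a, b)
```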

Subspaces are naturally described as spans

Linear Independence

A set of vectors $S$ in a vector space is linearly independent if no $\vec{v}\in S$ is a linear combination of the other vectors in $S$.
Equivalently, a subset $S$ of a vector space is linearly independent iff the only linear relationship $c_1\vec{s}_1+\cdots+c_n\vec{s}_n=\vec{0}$ is the trivial one $(c_1,\ldots,c_n)=(0,\ldots,0)$.
So, to determine linear independence, solve the equation
$$c_1\vec{s}_1+\cdots+c_n\vec{s}_n=\vec{0}$$
If there is a non-trivial solution, the set is linearly dependent.
In row echelon form, every $0$ row is a linear combination of the other rows; similarly, every non-zero row is not a linear combination of the other rows.

If $V$ is a vector space, $S$ is a subset of that space, and $\vec{v}\in V$, then
$$[S\cup\{\vec{v}\}]=[S]\text{ iff }\vec{v}\in[S]$$
In other words, adding a vector already in the span doesn't change the span, and removing a linearly dependent vector doesn't change the span.

A set $S$ is linearly independent iff removing any $\vec{v}\in S$ shrinks the span.

Any finite set in a vector space has a linearly independent subset with the same span.
Proof: just keep removing dependent vectors until the set is linearly independent.

A subset is linearly dependent iff some $\vec{s}_i$ is a linear combination of the vectors before it.

Proof

The set containing only the first element is linearly independent. Keep adding one vector at a time, in order, until you get a linearly dependent set; the last vector added is then a linear combination of the ones before it, so remove it. Repeat, adding one vector at a time and removing any that makes the set dependent, until the whole set has been covered. Since the vectors were added in order, every vector that is a linear combination of the others is a linear combination of the ones before it.


Explanation using a concrete set:
The set
$$\left\{\begin{pmatrix}1\\2\\3\end{pmatrix}, \begin{pmatrix}1\\0\\1\end{pmatrix}, \begin{pmatrix}4\\2\\6\end{pmatrix}, \begin{pmatrix}3\\4\\7\end{pmatrix}, \begin{pmatrix}0\\2\\1\end{pmatrix}, \begin{pmatrix}2\\6\\6\end{pmatrix}, \begin{pmatrix}1\\4\\4\end{pmatrix}\right\}$$
is linearly dependent. Call the vectors $\vec{v}_1,\ldots,\vec{v}_7$.
Start with $\vec{v}_1$. Clearly this is linearly independent because there's only one vector.
Add $\vec{v}_2$. The set $\{\vec{v}_1,\vec{v}_2\}$ is linearly independent.
Add $\vec{v}_3$. Now, $\vec{v}_3=\vec{v}_1+3\vec{v}_2$, so the set $\{\vec{v}_1,\vec{v}_2,\vec{v}_3\}$ is linearly dependent. Remove $\vec{v}_3$.
$\vec{v}_4=2\vec{v}_1+\vec{v}_2$, so $\{\vec{v}_1,\vec{v}_2,\vec{v}_4\}$ is linearly dependent. Remove $\vec{v}_4$.
$\vec{v}_5$ is not a linear combination of $\vec{v}_1$ and $\vec{v}_2$, so keep it.
Continue to the end.
Since the full set is linearly dependent, at some point adding a vector makes the current set dependent, and that new vector has to be a linear combination of the vectors kept so far, i.e. of the vectors before it.
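The "keep a vector only if it is not a combination of the ones already kept" procedure can be sketched in Python, using exact rational arithmetic and the fact that a vector is independent of a set iff adding it increases the rank (a sketch; the function names are ours, not from the text):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a list of 3-component rows, via Gaussian elimination over Q."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(3):                  # 3 columns, since vectors are in R^3
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]   # move the pivot row up
        for i in range(len(m)):           # clear the column elsewhere
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

vs = [(1, 2, 3), (1, 0, 1), (4, 2, 6), (3, 4, 7), (0, 2, 1), (2, 6, 6), (1, 4, 4)]
kept = []
for v in vs:
    if rank(kept + [v]) > rank(kept):     # v is not a combination of kept ones
        kept.append(v)
print(kept)  # → [(1, 2, 3), (1, 0, 1), (0, 2, 1)]
```

The surviving vectors are exactly $\vec{v}_1$, $\vec{v}_2$, and $\vec{v}_5$, matching the walkthrough above.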

Example 5.7

Are $\begin{pmatrix}1\\2\\3\end{pmatrix},\begin{pmatrix}1\\-1\\2\end{pmatrix},\begin{pmatrix}-1\\7\\0\end{pmatrix}$ linearly independent?


It doesn't matter whether these vectors are taken as the rows or the columns of the matrix.

Rows:
$$\left(\begin{array}{ccc|c}1&2&3&v_1\\1&-1&2&v_2\\-1&7&0&v_3\end{array}\right)\xrightarrow[\rho_1+\rho_3]{-\rho_1+\rho_2}\left(\begin{array}{ccc|c}1&2&3&v_1\\0&-3&-1&v_2-v_1\\0&9&3&v_3+v_1\end{array}\right)$$
Clearly the third row is $-3$ times the second. So these vectors are linearly dependent, because $v_3+v_1=-3(v_2-v_1)$, or
$$\begin{pmatrix}-1\\7\\0\end{pmatrix}=2\begin{pmatrix}1\\2\\3\end{pmatrix}-3\begin{pmatrix}1\\-1\\2\end{pmatrix}$$
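The dependence found by row reduction is easy to verify entry by entry (a sketch, not from the text):

```python
# The three vectors of the example and the dependence found by row reduction.
v1, v2, v3 = (1, 2, 3), (1, -1, 2), (-1, 7, 0)

# v3 = 2*v1 - 3*v2, checked entry by entry.
assert tuple(2 * a - 3 * b for a, b in zip(v1, v2)) == v3

# Equivalently, 2*v1 - 3*v2 - 1*v3 = 0 is a nontrivial linear relationship,
# so the set is linearly dependent.
assert tuple(2 * a - 3 * b - c for a, b, c in zip(v1, v2, v3)) == (0, 0, 0)
```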

Columns:
We will show that the linear relationship doesn't change even if the vectors are taken as the columns.
The three vectors are
$$\begin{pmatrix}1\\2\\3\end{pmatrix},\ \begin{pmatrix}1\\-1\\2\end{pmatrix},\ 2\begin{pmatrix}1\\2\\3\end{pmatrix}-3\begin{pmatrix}1\\-1\\2\end{pmatrix}$$
The third vector is written as a linear combination to show that the linear relation is preserved by row operations.
Subtract $2\rho_1$ from $\rho_2$:
$$\begin{pmatrix}1\\0\\3\end{pmatrix},\ \begin{pmatrix}1\\-3\\2\end{pmatrix},\ 2\begin{pmatrix}1\\0\\3\end{pmatrix}-3\begin{pmatrix}1\\-3\\2\end{pmatrix}$$
The relationship clearly still holds.
Subtract $3\rho_1$ from $\rho_3$:
$$\begin{pmatrix}1\\0\\0\end{pmatrix},\ \begin{pmatrix}1\\-3\\-1\end{pmatrix},\ 2\begin{pmatrix}1\\0\\0\end{pmatrix}-3\begin{pmatrix}1\\-3\\-1\end{pmatrix}$$
Add $\rho_3$ to $\rho_1$:
$$\begin{pmatrix}1\\0\\0\end{pmatrix},\ \begin{pmatrix}0\\-3\\-1\end{pmatrix},\ 2\begin{pmatrix}1\\0\\0\end{pmatrix}-3\begin{pmatrix}0\\-3\\-1\end{pmatrix}$$
The vectors are now
$$\begin{pmatrix}1\\0\\0\end{pmatrix},\ \begin{pmatrix}0\\-3\\-1\end{pmatrix},\ \begin{pmatrix}2\\9\\3\end{pmatrix}$$
This set is the result of doing row operations, and clearly the relationship still holds; here, though, it is very easy to see the exact combination.

Basis

A basis of a vector space is an (ordered) sequence of vectors that is linearly independent and spans the space.
It is ordered so you know which coordinate corresponds to which vector (i.e. $\langle2,3\rangle$ is different from $\langle3,2\rangle$).
The standard (or natural) basis for $\mathbb{R}^n$ is
$$\mathcal{E}_n=\left\langle\begin{pmatrix}1\\0\\\vdots\\0\end{pmatrix},\begin{pmatrix}0\\1\\\vdots\\0\end{pmatrix},\ldots,\begin{pmatrix}0\\0\\\vdots\\1\end{pmatrix}\right\rangle$$
where the component vectors are denoted $\vec{e}_1,\ldots,\vec{e}_n$.
A subset (a basis is a sequence, but it is commonly referred to as a set) is a basis iff each vector in the space can be expressed as a linear combination of the basis vectors in a unique way.

Proof

A sequence is a basis iff it is a spanning set and is linearly independent. A subset is a spanning set iff every vector in the space can be expressed as a linear combination in at least one way. So all that remains is to show that a subset is linearly independent iff every vector in the span can be expressed in at most one way.
Consider two representations of a vector $\vec{v}$, adding $0\cdot\vec{\beta}_i$ terms if necessary so that both use the same basis vectors:
$$\vec{v}=c_1\vec{\beta}_1+\cdots+c_n\vec{\beta}_n=d_1\vec{\beta}_1+\cdots+d_n\vec{\beta}_n$$
This holds iff
$$(c_1-d_1)\vec{\beta}_1+\cdots+(c_n-d_n)\vec{\beta}_n=\vec{0}$$
Since the basis vectors are linearly independent (and in particular none of them is $\vec{0}$), we must have $c_i=d_i$ for all $i$, i.e. the coefficients are the same. So there is only one representation.

i.e. it works like ordinary coordinates: there's only one way to get $(2,5)$, and that is $(2,5)$.

In a vector space with basis $B$, the representation of $\vec{v}$ with respect to $B$ is the column vector of coefficients
$$\text{Rep}_B(\vec{v})=\begin{pmatrix}c_1\\c_2\\\vdots\\c_n\end{pmatrix}_B$$

Since there is only one solution to $\vec{v}=c_1\vec{\beta}_1+\cdots+c_n\vec{\beta}_n$, the representation is unique.

Example 5.8

Consider the basis
$$\mathcal{E}_3=\left\langle\begin{pmatrix}1\\0\\0\end{pmatrix},\begin{pmatrix}0\\1\\0\end{pmatrix},\begin{pmatrix}0\\0\\1\end{pmatrix}\right\rangle$$
The representation of $\vec{v}=\begin{pmatrix}1\\2\\3\end{pmatrix}$
is the coordinate $(1,2,3)$, or using the above notation,
$$\text{Rep}_{\mathcal{E}_3}(\vec{v})=\begin{pmatrix}1\\2\\3\end{pmatrix}_{\mathcal{E}_3}$$

Example 5.9

The basis $B=\langle1+x,1-x\rangle$ spans the space of linear polynomials $\mathcal{P}_1=\{a+bx \mid a,b\in\mathbb{R}\}$.
The representation is calculated by writing a polynomial as a linear combination of the basis vectors: $a+bx=c_1(1+x)+c_2(1-x)=(c_1+c_2)+(c_1-c_2)x$, giving the system
$$c_1+c_2=a\\c_1-c_2=b$$
The solution is $c_1=(a+b)/2$ and $c_2=(a-b)/2$, so the coordinate is
$$\text{Rep}_B(a+bx)=\begin{pmatrix}\frac{a+b}{2}\\\frac{a-b}{2}\end{pmatrix}_B$$
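The formulas for $c_1$ and $c_2$ can be round-trip tested (a sketch using exact rationals; the function names `rep_B` and `recombine` are ours):

```python
from fractions import Fraction

def rep_B(a, b):
    """Coordinates of a + bx with respect to the basis <1+x, 1-x>."""
    return Fraction(a + b, 2), Fraction(a - b, 2)

def recombine(c1, c2):
    """Expand c1*(1+x) + c2*(1-x) back into (constant term, x coefficient)."""
    return c1 + c2, c1 - c2

# Round-tripping recovers every polynomial a + bx exactly.
for a in range(-4, 5):
    for b in range(-4, 5):
        assert recombine(*rep_B(a, b)) == (a, b)
```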